
fix(openclaw-plugin): run afterTurn auto-capture in background queue #875

Closed

imleon wants to merge 3 commits into volcengine:main from imleon:fix/async-afterturn-capture

Conversation


@imleon imleon commented Mar 22, 2026

Summary

This PR makes OpenViking auto-capture non-blocking in the OpenClaw context-engine path.

Previously, afterTurn awaited the full auto-capture pipeline (createSession -> addSessionMessage -> extractSessionMemories -> deleteSession), which could delay final message delivery in chat
channels when extraction was slow.

This change moves auto-capture to an in-memory background queue keyed by session, so user-facing reply completion is no longer blocked by capture latency.
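As a rough sketch of the control-flow change (hypothetical names and shapes for illustration, not the actual plugin API), the blocking await is replaced by an enqueue-and-return:

```typescript
// Hypothetical sketch of the afterTurn change; names are illustrative,
// not the real plugin API.

type TurnParams = { sessionKey?: string; messages: string[] };

// Stand-in for the capture pipeline: createSession -> addSessionMessage ->
// extractSessionMemories -> deleteSession.
async function runAutoCapture(_params: TurnParams): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 50)); // simulate slow extraction
}

// Before: the final reply was gated on extraction latency.
async function afterTurnBlocking(params: TurnParams): Promise<void> {
  await runAutoCapture(params);
}

// After: capture starts in the background; afterTurn returns immediately
// and failures are only logged.
function afterTurnNonBlocking(params: TurnParams): void {
  void runAutoCapture(params).catch((err) =>
    console.warn("openviking auto-capture failed", err),
  );
}
```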

What changed

  • examples/openclaw-plugin/context-engine.ts
    • Added an optional sessionKey?: string field to the afterTurn params type.
    • Extracted existing capture logic into runAutoCapture(...) (behavior/logging kept intact).
    • Added enqueueAutoCapture(...) with per-session sequential queue:
      • Same session: serialized capture jobs
      • Different sessions: can proceed independently
    • Updated afterTurn(...) to enqueue and return immediately (no synchronous wait on extraction).
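The per-session sequential queue described above can be sketched as a map of promise chains (an assumed shape; the actual enqueueAutoCapture in context-engine.ts may differ):

```typescript
// Assumed sketch of a per-session sequential queue; the real
// enqueueAutoCapture in context-engine.ts may be shaped differently.

const captureQueues = new Map<string, Promise<void>>();

function enqueueAutoCapture(
  sessionKey: string,
  job: () => Promise<void>,
): Promise<void> {
  // Chain the new job onto the previous one for this session, so captures
  // within a session run strictly in order. Other sessions have their own
  // chains and proceed independently.
  const prev = captureQueues.get(sessionKey) ?? Promise.resolve();
  const next = prev
    .then(job)
    .catch((err) => console.warn("auto-capture job failed", err));
  captureQueues.set(sessionKey, next);
  // Remove the drained chain so the map does not grow without bound.
  next.finally(() => {
    if (captureQueues.get(sessionKey) === next) {
      captureQueues.delete(sessionKey);
    }
  });
  return next;
}
```

Chaining on the stored promise (rather than firing each job off independently) is what provides the same-session serialization while leaving different sessions free to run concurrently.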

Why

In real traffic, slow capture/extraction could create a visible gap between model completion and final message/card completion.
By decoupling capture from the synchronous afterTurn return path, we preserve memory capture while removing user-visible blocking.

Behavior impact

  • User-facing latency: improved (final reply is no longer gated by auto-capture extraction latency)
  • Capture semantics: preserved
  • Logging: preserved (capture-check, auto-captured, capture-detail, error/warn logs)
  • Safety: per-session queue avoids concurrent capture contention for the same session

Validation

Manual log-based validation in the Feishu/OpenClaw integration:

  • Before: agent end could be followed by several seconds of openviking auto-capture activity before the final deliver called log
  • After: deliver called / finalization proceeds immediately; auto-capture runs in the background and logs later as needed

Risks / trade-offs

  • In-flight queued capture jobs are in-memory and may be lost on abrupt process crash/restart
  • This PR intentionally prioritizes user-facing response latency and does not introduce persistent job infrastructure

Chinese Summary (translated)

1) Background

Currently, afterTurn synchronously waits for the entire auto-capture flow (create session, write messages, extract memories, delete session). When extraction is slow, this delays the user-visible final reply/card completion, producing a "the model has finished but I still have to wait several seconds" experience.

2) What this change does

Auto-capture moves from the synchronous path to background-queue execution: afterTurn only enqueues the job and returns immediately, while the actual extraction logic runs in the background. The queue is serialized per sessionKey/sessionId to ensure no concurrent conflicts within the same session; the existing logging and capture logic are unchanged.

3) Effect and trade-offs

The effect is a significant reduction in user-perceived latency: the final reply is no longer blocked by auto-capture. The trade-off is that the queue is in-memory, so unfinished jobs may be lost if the process exits abnormally; this PR prioritizes online response latency and does not introduce a heavier persistent job system.

Move OpenViking auto-capture out of the synchronous afterTurn path so message delivery is not blocked by extraction latency. Keep capture behavior/logging intact and process capture jobs sequentially per session via an in-memory queue to avoid concurrent session contention.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

CLAassistant commented Mar 22, 2026

CLA assistant check
All committers have signed the CLA.

@github-actions

Failed to generate code suggestions for PR

@Mijamind719
Collaborator

Thank you for your contribution. Regarding memory extraction, we implemented a queue internally within OpenViking to handle asynchronous requests. The latest code for the openclaw plugin has resolved the issue you mentioned.

@github-project-automation github-project-automation bot moved this from Backlog to Done in OpenViking project Mar 31, 2026
@Mijamind719 Mijamind719 self-assigned this Mar 31, 2026